
Mahesh resign shreds #57

Open

gregcusack wants to merge 2 commits into master from mahesh-resign-shreds

Conversation

@gregcusack (Owner)

Problem

Summary of Changes

Fixes #

@gregcusack (Owner Author) left a comment

Thank you for putting this together and walking me through it! Thoughts are inline below. Mostly looks good, just a few stylistic things, and I think there is one bug when remainder_to_be_signed is 0. Feel free to reach out with any questions/concerns.

let resigned = chained && is_last_in_slot;
// let resigned = chained && is_last_in_slot;
// only sign if last batch in slot and is chained
let sign_last_batch = chained && is_last_in_slot;
@gregcusack (Owner Author):

Avoid the term "batch" here, since we call make_shreds_from_data() on batches; I would call this sign_last_fec_set instead.
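For example, a minimal sketch of just the suggested rename (nothing else changed):

// only sign when chaining is enabled and this is the last batch in the slot
let sign_last_fec_set = chained && is_last_in_slot;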

shreds.extend({
// Create data chunks out of remaining data + padding.
let chunks = sig_data
.chunks(data_buffer_per_shred_size_signed)
@gregcusack (Owner Author):

If we don't want to resign the data, this is going to chunk with the wrong size.


On this: we will only get here if there is something to sign (sig_data will hold at most one fec_set worth of data), so it will only pull one chunk. I could probably assert that.

@gregcusack (Owner Author):

Yes, you are right, I messed that up. No need to assert.

Comment on lines +1101 to +1104
let remainder_to_be_signed = data.len() % data_buffer_total_size;
if remainder_to_be_signed <= data_buffer_total_size_signed {
(data, sig_data) = data.split_at(data.len() - remainder_to_be_signed);
}
@gregcusack (Owner Author):

When remainder_to_be_signed is 0, sig_data ends up empty, so we don't actually sign anything.
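A quick standalone illustration of the edge case (the buffer sizes here are hypothetical; only the variable names come from the PR):

fn main() {
    // hypothetical fec-set capacities, purely to show the arithmetic
    let data_buffer_total_size = 96usize;
    let data_buffer_total_size_signed = 64usize;
    // data length is an exact multiple of the unsigned fec-set size
    let data = vec![0u8; 3 * data_buffer_total_size];
    let remainder_to_be_signed = data.len() % data_buffer_total_size;
    assert_eq!(remainder_to_be_signed, 0);
    // the `<=` check passes, so the split is taken...
    assert!(remainder_to_be_signed <= data_buffer_total_size_signed);
    let (unsigned, sig_data) = data.split_at(data.len() - remainder_to_be_signed);
    // ...but everything lands on the unsigned side and sig_data is empty,
    // so the last fec set would never get signed
    assert_eq!(unsigned.len(), data.len());
    assert!(sig_data.is_empty());
}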

// 2.) Shreds is_empty, which only happens when we entered w/ zero data.
//
// In either case, we want to generate empty data shreds.
if !data.is_empty() || shreds.is_empty() {
@gregcusack (Owner Author):

Refactor this, all the way down to line 1201, into a function, and pass in sign_last_batch plus a chunk_size (where chunk_size is either data_buffer_per_shred_size or data_buffer_per_shred_size_signed), or something like this.
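Roughly this shape (a sketch only; the function name and body are hypothetical, with the existing logic down through line 1201 living inside):

struct Shred; // stand-in for the real shred type

fn make_data_shreds_for_fec_sets(data: &[u8], chunk_size: usize, sign_last_batch: bool) -> Vec<Shred> {
    // chunk_size would be data_buffer_per_shred_size for unsigned fec sets,
    // or data_buffer_per_shred_size_signed when the last fec set gets resigned
    let shreds: Vec<Shred> = data
        .chunks(chunk_size)
        .map(|_chunk| Shred) // the existing per-shred construction would go here
        .collect();
    if sign_last_batch {
        // the existing resigning of the final fec set would go here
    }
    shreds
}

fn main() {
    // hypothetical call: 256 bytes chunked into 4 shreds of 64 bytes each
    let shreds = make_data_shreds_for_fec_sets(&[0u8; 256], 64, true);
    assert_eq!(shreds.len(), 4);
}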

Comment on lines +1101 to +1104
let remainder_to_be_signed = data.len() % data_buffer_total_size;
if remainder_to_be_signed <= data_buffer_total_size_signed {
(data, sig_data) = data.split_at(data.len() - remainder_to_be_signed);
}
@gregcusack (Owner Author):

I think there is a bug here: when remainder_to_be_signed is 0, sig_data is empty, so nothing ends up getting signed.

Comment on lines +1097 to 1121
let mut sig_data: &[u8] = &[];
let mut shreds = {
let number_of_batches = data.len().div_ceil(data_buffer_total_size);
let number_of_batches = if sign_last_batch {
if data.len() > data_buffer_total_size_signed {
let remainder_to_be_signed = data.len() % data_buffer_total_size;
if remainder_to_be_signed <= data_buffer_total_size_signed {
(data, sig_data) = data.split_at(data.len() - remainder_to_be_signed);
}
else {
(data, sig_data) = data.split_at(data.len());
assert_eq!(sig_data.len(), 0);
}
data.len().div_ceil(data_buffer_total_size) + 1
} else {
sig_data = data;
data = &[];
1
}
} else {
data.len().div_ceil(data_buffer_total_size)
};
let total_num_shreds = SHREDS_PER_FEC_BLOCK * number_of_batches;
Vec::<Shred>::with_capacity(total_num_shreds)
};
stats.data_bytes += data.len();
@gregcusack (Owner Author):

To be honest, this is pretty hard to follow. What if we just did something like:

let (unsigned_data, signed_data) = if sign_last_fec_set {
    // Reserve one signed fec set (possibly empty) at the end.
    if data.len() > data_buffer_total_size_signed {
        // everything before the final (signed) fec set stays unsigned
        let split_at = data.len() - data_buffer_total_size_signed;
        data.split_at(split_at)
    } else {
        (&[][..], data) // only enough data for one fec set, sign the whole thing
    }
} else {
    (data, &[][..]) // not the last batch in the slot, so don't sign
};
stats.data_bytes += unsigned_data.len() + signed_data.len();

@gregcusack (Owner Author):

And then do:

let unsigned_sets = unsigned_data.len().div_ceil(data_buffer_total_size);
let number_of_batches = if sign_last_fec_set {
    unsigned_sets + 1
} else {
    unsigned_sets
};
let mut shreds = Vec::<Shred>::with_capacity(SHREDS_PER_FEC_BLOCK * number_of_batches);


This is very elegant. Love it!
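For reference, a quick standalone sanity check of how the two suggested pieces behave together (the sizes and the fec-block constant below are hypothetical placeholders; only the variable names come from the PR):

fn main() {
    const SHREDS_PER_FEC_BLOCK: usize = 32; // hypothetical value for illustration
    let data_buffer_total_size = 96usize; // hypothetical unsigned fec-set capacity
    let data_buffer_total_size_signed = 64usize; // hypothetical signed fec-set capacity
    let sign_last_fec_set = true;

    let data = vec![0u8; 200];
    let (unsigned_data, signed_data) = if sign_last_fec_set {
        if data.len() > data_buffer_total_size_signed {
            data.split_at(data.len() - data_buffer_total_size_signed)
        } else {
            (&[][..], &data[..])
        }
    } else {
        (&data[..], &[][..])
    };
    // the last 64 bytes are reserved for the signed fec set
    assert_eq!(unsigned_data.len(), 136);
    assert_eq!(signed_data.len(), 64);

    let unsigned_sets = unsigned_data.len().div_ceil(data_buffer_total_size);
    let number_of_batches = if sign_last_fec_set {
        unsigned_sets + 1
    } else {
        unsigned_sets
    };
    // 2 unsigned fec sets (136 bytes over 96-byte sets, rounded up) + 1 signed fec set
    assert_eq!(number_of_batches, 3);
    assert_eq!(SHREDS_PER_FEC_BLOCK * number_of_batches, 96);
}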
